Cascading randomized weighted majority: A new online ensemble learning algorithm

Authors

  • Mohammadzaman Zamani
  • Hamid Beigy
  • Amirreza Shaban
Abstract

With the increasing volume of data in the world, the best approach for learning from such data is to exploit an online learning algorithm. Online ensemble methods are online algorithms that take advantage of an ensemble of classifiers to predict labels of data. Prediction with expert advice is a well-studied problem in the online ensemble learning literature. The Weighted Majority algorithm and the Randomized Weighted Majority (RWM) algorithm are the best-known solutions to this problem, both aiming to converge to the best expert. Since the best expert does not necessarily have the minimum error in all regions of the data space, defining specific regions and converging to the best expert in each of these regions leads to a better result. In this paper, we aim to resolve this shortcoming of the RWM algorithm by proposing a novel online ensemble algorithm for the problem of prediction with expert advice. We propose a cascading version of RWM that achieves not only better experimental results but also a better error bound for sufficiently large datasets.
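As a rough illustration of the Randomized Weighted Majority scheme the abstract builds on, the sketch below implements the standard RWM update (multiplicative penalty `beta` on erring experts, randomized prediction proportional to weights). The function name, parameter names, and the binary-label setup are illustrative assumptions, not taken from the paper, and this is the plain RWM baseline rather than the proposed cascading variant:

```python
import random

def randomized_weighted_majority(expert_preds, outcomes, beta=0.5, seed=0):
    """Sketch of the Randomized Weighted Majority (RWM) algorithm.

    expert_preds: list over rounds; each element is a list of N expert
    predictions in {0, 1}.  outcomes: true labels in {0, 1}.
    beta in (0, 1) is the multiplicative penalty factor.
    (Names and setup are illustrative, not from the paper.)
    """
    rng = random.Random(seed)
    n = len(expert_preds[0])
    weights = [1.0] * n
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        total = sum(weights)
        # Predict 1 with probability equal to the weighted fraction of experts voting 1.
        p_one = sum(w for w, p in zip(weights, preds) if p == 1) / total
        y_hat = 1 if rng.random() < p_one else 0
        if y_hat != y:
            mistakes += 1
        # Multiplicatively penalize every expert that erred this round.
        weights = [w * (beta if p != y else 1.0) for w, p in zip(weights, preds)]
    return mistakes, weights
```

Because the weight update depends only on the experts' errors, the weights converge toward the single globally best expert; the paper's cascading idea addresses the case where different experts dominate in different regions of the data space.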


Related articles

Cos 511: Theoretical Machine Learning 1 Review of Last Lecture 2 Randomized Weighted Majority Algorithm (rwma)

Recall the online learning model we discussed in the previous lecture: N = number of experts. For t = 1, 2, . . . , T rounds: 1) each expert i, 1 ≤ i ≤ N, makes a prediction ξᵢ ∈ {0, 1}; 2) the learner makes a prediction ŷ ∈ {0, 1}; 3) the outcome y ∈ {0, 1} is observed (a mistake happens if ŷ ≠ y). With this framework in hand, we investigated a particular algorithm, the Weighted Majority Algorithm (WMA), as follows: N ...
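The three-step protocol in this snippet can be sketched as deterministic Weighted Majority: predict by weighted vote, then penalize experts that erred. The function and parameter names below are illustrative assumptions, not identifiers from the lecture notes:

```python
def weighted_majority(expert_preds, outcomes, beta=0.5):
    """Deterministic Weighted Majority (WMA) sketch of the protocol above.

    expert_preds: list over rounds of N expert predictions in {0, 1};
    outcomes: true labels in {0, 1}.  Names here are illustrative.
    """
    n = len(expert_preds[0])
    weights = [1.0] * n
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        # Step 2: the learner predicts by weighted majority vote.
        vote_one = sum(w for w, p in zip(weights, preds) if p == 1)
        vote_zero = sum(weights) - vote_one
        y_hat = 1 if vote_one >= vote_zero else 0
        # Step 3: observe the outcome; a mistake happens if y_hat != y.
        if y_hat != y:
            mistakes += 1
        # Penalize every expert that erred this round.
        weights = [w * (beta if p != y else 1.0) for w, p in zip(weights, preds)]
    return mistakes
```

Unlike the randomized variant, this version predicts deterministically, so an adversary can force a mistake on every round in the worst case; that gap motivates the randomized algorithm discussed next in the notes.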


Online Learning from Experts: Minimax Regret

In the last three lectures we have been discussing online learning algorithms, where we receive the instance x and then its label y for t = 1, ..., T. Specifically, in the last lecture we talked about online learning from experts and online prediction. We saw many algorithms, such as the Halving algorithm, the Weighted Majority (WM) algorithm, and lastly the Weighted Majority Continuous (WMC) algorithm. We a...


Dynamic Weighted Majority: A New Ensemble Method for Tracking Concept Drift

Algorithms for tracking concept drift are important for many applications. We present a general method based on the Weighted Majority algorithm for using any online learner for concept drift. Dynamic Weighted Majority (DWM) maintains an ensemble of base learners, predicts using a weighted-majority vote of these “experts”, and dynamically creates and deletes experts in response to changes in pe...


Real-time Ranking of Electrical Feeders using Expert Advice

We are using machine learning to construct a failure-susceptibility ranking of feeders that supply electricity to the boroughs of New York City. The electricity system is inherently dynamic and driven by environmental conditions and other unpredictable factors, and thus the ability to cope with concept drift in real time is central to our solution. Our approach builds on the ensemble-based noti...


Less Is More: A Comprehensive Framework for the Number of Components of Ensemble Classifiers

The number of component classifiers chosen for an ensemble has a great impact on its prediction ability. In this paper, we use a geometric framework for a priori determining the ensemble size, applicable to most of the existing batch and online ensemble classifiers. There are only a limited number of studies on the ensemble size considering Majority Voting (MV) and Weighted Majority Voting (WMV...



Journal:
  • Intell. Data Anal.

Volume 20, Issue 

Pages  -

Publication year: 2016